Frequently Asked Questions

A curated summary of the top questions asked on our Slack community, often relating to implementation, functionality, and building better products generally.
Statsig FAQs

Do permanent gates count towards billable events, and how can I launch a feature flag for a subset group without running up billable events?

Permanent gates do count towards billable events. An event is recorded when your application calls the Statsig SDK to check whether a user should be exposed to a feature gate or experiment, and this includes permanent gates. However, if a permanent gate is set to 'Launched' or 'Disabled', it will always return the default value and stop generating billable exposure events.
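As a rough illustration of where those exposure events come from, here is a minimal sketch using the statsig-node server SDK; the gate name and user are hypothetical, and exact call signatures may vary by SDK version.

```typescript
// Minimal sketch with the statsig-node server SDK.
// 'permanent_subset_gate' and the user below are hypothetical examples.
import Statsig from 'statsig-node';

async function demo() {
  await Statsig.initialize('server-secret-key'); // your server secret key

  // This check records an exposure event for the user. While the gate is in
  // its rollout/test phase, that exposure counts toward billable events; once
  // the gate is Launched or Disabled, checks return the default value and no
  // longer generate billable exposures.
  const passes = await Statsig.checkGate({ userID: 'user-123' }, 'permanent_subset_gate');
  console.log('gate passes:', passes);

  await Statsig.shutdown(); // flush queued events before exiting
}

demo().catch(console.error);
```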

During the rollout or test period of a permanent gate, exposures are collected and results are measured; this is when the gate is billable. Once you Launch or Disable the gate, it is no longer billable. The only real difference with permanent gates is that the designation tells our system not to nudge you to clean the gate up, since it is expected to live in your codebase long term. More details can be found in the permanent and stale gates documentation.

If you want to launch a feature flag but only set it to true for a subset group, you can achieve this with a permanent, non-billable gate that targets a specific set of users: toggle off "Measure Metric Lifts" but keep the gate enabled. With this workflow you don't need to click "Launch" at all.

Configured this way, the gate effectively stops generating billable events, which is useful when you want to target a specific set of users without running up billable events.

Please note that we are continuously working on streamlining this process and improving the user experience. Your feedback is always appreciated.

How to send heads-up emails to users before exposing them to a new feature?

In order to notify users before they are exposed to a new feature, you can create a separate feature gate to control the rollout percentage and a segment to contain the account IDs that can be exposed to the feature. This approach allows you to effectively manage the rollout process and ensure that users are notified in advance.

The main feature gate will only pass for the accounts in the segment that also pass the separate feature gate. This provides a clear distinction between the users who are eligible and those who have been exposed to the feature.

Here's a brief overview of the process:

1. Create a main feature gate (rollout_feature_gate). The users that pass this gate will be exposed to the feature.

2. Create a separate feature gate (exposure_eligibility_gate) to control the rollout percentage. The users that pass this gate are the ones eligible to be exposed to the feature.

3. Create a segment (eligible_accounts) that contains all the account IDs that can be exposed to the feature.

The rollout_feature_gate will pass only for accounts in the eligible_accounts segment that also pass the exposure_eligibility_gate. After the heads-up period, export all account IDs that pass the exposure_eligibility_gate into the eligible_accounts segment, and then increase the exposure_eligibility_gate rollout percentage.

This approach lets you distinguish between the eligible users who have been exposed to the feature (the eligible_accounts segment) and those who are eligible but potentially not yet exposed to it (the exposure_eligibility_gate). You can manage additional rules and environment conditions in the main feature gate (rollout_feature_gate); a sketch of how the application side might check these gates is shown below.
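To make the flow concrete, here is a minimal sketch (statsig-node, using the gate and segment names from the steps above; sendHeadsUpEmail is a hypothetical placeholder for your own notification logic):

```typescript
// Sketch only: assumes the gates and segment described above already exist in
// the Statsig console, and that Statsig.initialize() has run at startup.
import Statsig from 'statsig-node';

async function sendHeadsUpEmail(accountID: string): Promise<void> {
  // Placeholder: integrate with your email provider here.
  console.log(`Would send heads-up email to account ${accountID}`);
}

export async function handleAccount(accountID: string) {
  const user = { userID: accountID };

  // Accounts passing the eligibility gate fall within the current rollout
  // percentage and should receive the heads-up email before exposure.
  if (await Statsig.checkGate(user, 'exposure_eligibility_gate')) {
    await sendHeadsUpEmail(accountID);
  }

  // The main gate only passes for accounts already added to the
  // eligible_accounts segment, i.e. accounts that were notified earlier.
  if (await Statsig.checkGate(user, 'rollout_feature_gate')) {
    // Enable or render the new feature for this account.
  }
}
```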

Remember to test your setup thoroughly in a pre-production environment before rolling it out to ensure everything works as expected.

Why am I seeing failures in gate evaluation on React Native despite the gate being set to 100% pass and Statsig being initialized?

If you're seeing unexpected evaluation results for a gate on React Native, even though the gate is set to 100% pass and Statsig has been initialized, there are several potential causes to consider.

The "Uninitialized" status in evaluationDetails.reason can occur even after calling .initialize() and awaiting the result. This issue can be due to several reasons:

1. **Ad Blockers**: Ad blockers can interfere with the initialization network request, causing it to fail.

2. **Network Failures**: Any network issues that prevent the initialization network request from completing successfully can result in an "Uninitialized" status.

3. **Timeouts**: The statsig-js SDK applies a default 3-second timeout on the initialize request. This can lead to more frequent initialization timeouts on mobile clients where users may have slower connections.
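If slow connections are hitting that default, one option is to raise the timeout when initializing. A minimal sketch, assuming the statsig-js initTimeoutMs option and a placeholder client key:

```typescript
// Sketch: raising the default 3-second init timeout in statsig-js.
import statsig from 'statsig-js';

await statsig.initialize(
  'client-sdk-key',        // placeholder client key
  { userID: 'user-123' },  // placeholder user
  {
    initTimeoutMs: 10000,  // allow up to 10s on slow mobile connections
  }
);
```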

If you encounter this issue, it's recommended to investigate the potential causes listed above. Check for the presence of ad blockers, network issues, and timeouts. This will help you identify the root cause and implement the appropriate solution.

It's worth noting that the initCalled: true value doesn't necessarily mean the initialization succeeded. It's important to check for any errors thrown from the initialization method.

If you're still experiencing issues, it might be helpful to use the debugging tools provided by Statsig. These tools can help you understand why a certain user got a certain value. For instance, you can check the diagnostics tab for higher-level pass/fail/bucketing population sizes over time, and for debugging specific checks, the logstream at the bottom is useful and shows both production and non-production exposures in near real-time.

One potential solution to this issue is to use the waitForInitialization option. Without it, your components render immediately, regardless of whether Statsig has finished initializing, which can result in 'Uninitialized' exposures. Setting waitForInitialization=true defers the rendering of those components until Statsig has initialized, guaranteeing they aren't rendered before initialize has completed, so you won't see those 'Uninitialized' exposures being logged. You can find more details in the Statsig documentation.
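With the React Native bindings, that might look roughly like the sketch below; the SDK key, user, and components are placeholders.

```typescript
// Sketch: deferring rendering until Statsig has initialized.
import React from 'react';
import { Text } from 'react-native';
import { StatsigProvider } from 'statsig-react-native';

function MainApp() {
  // Gate checks inside this subtree won't log 'Uninitialized' exposures.
  return <Text>Hello</Text>;
}

export default function App() {
  return (
    <StatsigProvider
      sdkKey="client-sdk-key"
      user={{ userID: 'user-123' }}
      waitForInitialization={true}
      initializingComponent={<Text>Loading...</Text>} // shown while init is in flight
    >
      <MainApp />
    </StatsigProvider>
  );
}
```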

However, if you can't use waitForInitialization because it remounts the navigation stack and thereby changes the navigation state, you can check for initialization through the initCompletionCallback option instead.
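A rough sketch of that approach, passing the callback through the SDK options (the exact callback arguments may vary by SDK version, so treat the shape below as illustrative):

```typescript
// Sketch: tracking initialization without waitForInitialization.
import statsig from 'statsig-js';

let statsigReady = false;

await statsig.initialize(
  'client-sdk-key',
  { userID: 'user-123' },
  {
    // Called once initialization completes (successfully or not).
    // Check your SDK version's typings for the exact argument order.
    initCompletionCallback: (initDurationMs, success, message) => {
      statsigReady = success;
      if (!success) {
        console.warn('Statsig init failed:', message);
      }
    },
  }
);
```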

You can also verify initialization by checking the isLoading value returned by useGate and useConfig, as well as initialized and initStarted on StatsigContext. If the issue persists, please reach out to the Statsig team for further assistance.

Why is there no exposure/checks data in the Diagnostics/Pulse Results tabs of the feature gate after launching?

If you're not seeing any exposure/checks data in the Diagnostics/Pulse Results tabs of the feature gate after launching, there are a few things you might want to check:

1. Ensure that your Server Secret Key is correct. You can find this in the Statsig console under Project Settings > API Keys.

2. Make sure that the name of the feature gate in your function matches exactly the name of the feature gate you've created in the Statsig console.

3. Verify that the user ID is being correctly set and passed to the StatsigUser object.

4. Check that your environment tier matches the one you've set in the Statsig console.

If all of these are correct and you're still not seeing any data in the Diagnostics/Pulse Results tabs, it might be a technical issue on our end.

The Statsig SDK batches events and flushes them periodically, as well as on shutdown or an explicit flush. If you are using the SDK in middleware or another short-lived process, it's recommended to call flush to guarantee events are sent. For more information, refer to the Statsig documentation.
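For example, in a short-lived middleware or serverless handler, a sketch (statsig-node, assuming the flush method referenced above; the gate and event names are hypothetical) might look like:

```typescript
// Sketch: flushing queued events in a short-lived handler. Assumes
// Statsig.initialize('server-secret-key') has already run at startup.
import Statsig from 'statsig-node';

export async function handler(userID: string) {
  const user = { userID };

  // checkGate and logEvent queue exposure/custom events in memory.
  const showFeature = await Statsig.checkGate(user, 'my_feature_gate');
  Statsig.logEvent(user, 'handler_invoked');

  // Long-running servers flush periodically and on shutdown, but in
  // middleware/serverless contexts an explicit flush ensures the queued
  // events are sent before the process is frozen or torn down.
  await Statsig.flush();

  return showFeature;
}
```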

If you're still not seeing any data, it's possible that there's an issue with event compression. In some cases, disabling event compression can resolve the issue. However, this should be done with caution and only as a last resort, as it may impact performance.

If you're using a specific version of the SDK, you might want to consider downgrading to a previous version, such as v5.13.2, which may resolve the issue.

Remember, if you're still experiencing issues, don't hesitate to reach out to the Statsig team for further assistance.
